Results 1 - 20 of 2,090
1.
Trends Hear ; 28: 23312165241246616, 2024.
Article in English | MEDLINE | ID: mdl-38656770

ABSTRACT

Negativity bias is a cognitive bias that results in negative events being perceptually more salient than positive ones. For hearing care, this means that hearing aid benefits can potentially be overshadowed by adverse experiences. Research has shown that sustaining focus on positive experiences has the potential to mitigate negativity bias. The purpose of the current study was to investigate whether a positive focus (PF) intervention can improve speech-in-noise abilities for experienced hearing aid users. Thirty participants were randomly allocated to a control or PF group (N = 2 × 15). Prior to hearing aid fitting, all participants filled out the short form of the Speech, Spatial and Qualities of Hearing scale (SSQ12) based on their own hearing aids. At the first visit, they were fitted with study hearing aids, and speech-in-noise testing was performed. Both groups then wore the study hearing aids for two weeks and sent daily text messages reporting hours of hearing aid use to an experimenter. In addition, the PF group was instructed to focus on positive listening experiences and to also report them in the daily text messages. After the 2-week trial, all participants filled out the SSQ12 questionnaire based on the study hearing aids and completed the speech-in-noise testing again. Speech-in-noise performance and SSQ12 Qualities score were improved for the PF group but not for the control group. This finding indicates that the PF intervention can improve subjective and objective hearing aid benefits.


Subject(s)
Correction of Hearing Impairment; Hearing Aids; Noise; Persons With Hearing Impairments; Speech Intelligibility; Speech Perception; Humans; Male; Female; Aged; Noise/adverse effects; Middle Aged; Correction of Hearing Impairment/instrumentation; Persons With Hearing Impairments/rehabilitation; Persons With Hearing Impairments/psychology; Perceptual Masking; Hearing Loss/rehabilitation; Hearing Loss/psychology; Hearing Loss/diagnosis; Audiometry, Speech; Surveys and Questionnaires; Aged, 80 and over; Time Factors; Acoustic Stimulation; Hearing; Treatment Outcome
2.
Cogn Res Princ Implic ; 9(1): 25, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652383

ABSTRACT

The use of face coverings can make communication more difficult by removing access to visual cues as well as affecting the physical transmission of speech sounds. This study aimed to assess the independent and combined contributions of visual and auditory cues to impaired communication when face coverings are used. In an online task, 150 participants rated videos of natural conversation along three dimensions: (1) how much they could follow, (2) how much effort was required, and (3) the clarity of the speech. Visual and audio variables were independently manipulated in each video, so that the same video could be presented with or without a superimposed surgical-style mask, accompanied by one of four audio conditions (unfiltered audio, or audio filtered to simulate the attenuation associated with a surgical mask, an FFP3 mask, or a visor). Hypotheses and analyses were pre-registered. Both the audio and visual variables had a statistically significant negative impact across all three dimensions, with the visibility of the talker's face making the largest contribution to participants' ratings. The study identifies a degree of attenuation whose negative effects can be overcome by the restoration of visual cues. The significant effects observed in this nominally low-demand task (speech in quiet) highlight the importance of visual and audio cues in everyday life and suggest that both should be considered in future face mask designs.
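For readers who want a concrete sense of the audio manipulation described above, the sketch below imposes a frequency-dependent attenuation curve on a speech signal with a linear-phase FIR filter. It is a minimal illustration, not the study's processing chain: the function name and the attenuation values are placeholders, not the measured transfer function of any real mask.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

def simulate_covering(signal, fs, freqs_hz, atten_db, n_taps=513):
    """Impose a frequency-dependent attenuation curve on `signal`
    using a window-method FIR filter (odd tap count, linear phase)."""
    gains = 10.0 ** (np.asarray(atten_db, dtype=float) / 20.0)
    fir = firwin2(n_taps, freqs_hz, gains, fs=fs)  # curve must span 0 .. fs/2
    return lfilter(fir, 1.0, signal)

# Illustrative placeholder curve (NOT the study's measured values):
# coverings chiefly attenuate the higher speech frequencies.
fs = 44100
mask_freqs = [0, 1000, 2000, 4000, 8000, fs / 2]
mask_db = [0, 0, -3, -5, -7, -8]
speech = np.random.randn(fs)  # stand-in for one second of recorded speech
filtered = simulate_covering(speech, fs, mask_freqs, mask_db)
```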


Subject(s)
Cues; Speech Perception; Humans; Adult; Female; Male; Young Adult; Speech Perception/physiology; Visual Perception/physiology; Masks; Adolescent; Speech/physiology; Communication; Middle Aged; Facial Recognition/physiology
3.
Brain Res ; : 148949, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38641266

ABSTRACT

Automatic parsing of syntactic information by the human brain is a well-established phenomenon, but its mechanisms remain poorly understood. Its best-known neurophysiological reflection is the early left-anterior negativity (ELAN) ERP component, with two alternative hypotheses for its origin: (1) error detection, or (2) morphosyntactic prediction/priming. To test these alternatives, we conducted two experiments using a non-attend passive design with visual distraction and recorded ERPs to spoken pronoun-verb phrases and to the same critical verbs presented in isolation, without pronouns. The results revealed an ELAN at ∼130-220 ms for pronoun-verb gender agreement violations, confirming a high degree of automaticity in early morphosyntactic parsing. Critically, the strongest ELAN was elicited by verbs outside phrasal context, which suggests that the typical ELAN pattern is underpinned by a reduction of ERP amplitudes for felicitous combinations, reflecting syntactic priming/predictability between related words/morphemes (potentially mediated by associative links formed during previous linguistic experience) rather than by specialized error-detection processes.

4.
Indian J Otolaryngol Head Neck Surg ; 76(2): 1498-1502, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38566641

ABSTRACT

PURPOSE: The current study was undertaken to estimate the prevalence of Single-Sided Deafness (SSD) in a tertiary healthcare hospital in Mumbai, and to determine the acceptance of Contralateral Routing of Signal (CROS) devices by individuals with SSD. METHOD: A retrospective study design was followed to collect data from September 2020 to September 2022. The data were collected by reviewing the files of cases diagnosed with Single-Sided Deafness in a tertiary healthcare hospital in Mumbai. RESULT: The prevalence of SSD was found to be 24% for the given period. Of 4,456 individuals, 50 took a free trial of a CROS device and 2 bought one. CONCLUSION: The poor acceptance and purchase of CROS devices are attributed to their cost and to the absence of the true binaural benefits of localization and hearing in noise. These benefits can be achieved with a cochlear implant (CI); however, cost and fear of surgery deterred the participants. Participants were also observed to make extensive use of communication repair strategies. Individuals with SSD should therefore be counselled about CI and given guidance on hearing care, with regular audiological follow-up.

5.
Infant Behav Dev ; 75: 101935, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38569416

ABSTRACT

This paper provides a selective overview of some of the research that has followed from the publication of Werker and Tees (1984a), "Cross-language speech perception: Evidence for perceptual reorganization during the first year of life." Specifically, I briefly present the original finding, our interpretation of its meaning, and some key replications and extensions. I then review some of the work that has followed, including work with different kinds of populations and different kinds of speech sound contrasts, as well as attunement (perceptual reorganization) to properties of language beyond phonetic contrasts. Included is the body of work that asks whether perceptual attunement is a critical-period phenomenon. Potential learning mechanisms for how experience guides phonetic perceptual development are also presented, as is work on the relation between speech perception and word learning.

6.
Int J Audiol ; : 1-8, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557258

ABSTRACT

OBJECTIVE: Speech-in-noise perception depends on the interaction between sensory and cognitive processes. One factor that can relate to both processes is bilingualism. This study aimed to determine the correlation between auditory working memory and speech-in-noise perception in Persian monolinguals and Kurdish-Persian bilinguals. DESIGN: Speech-in-noise tests (sentences-in-noise and syllables-in-noise) and auditory-working-memory tests (forward and backward digit span, and n-back) were performed. STUDY SAMPLE: Participants were 48 Kurdish-Persian bilinguals with a mean age of 24 (±4) years and 48 Persian monolinguals with a mean age of 25 (±2) years, all with normal hearing. RESULTS: Both language groups scored within normal limits on all memory and speech-in-noise tests. However, monolinguals outperformed bilinguals on the sentences-in-noise test (∼1.5 dB difference) and on all auditory-working-memory tests (∼1 digit difference). The two groups did not differ significantly on the syllables-in-noise test. In both groups, working memory capacity correlated significantly with sentences-in-noise performance, whereas no significant correlation was found between syllables-in-noise performance and working memory capacity at any SNR. CONCLUSIONS: Cognitive factors such as auditory working memory appear to correlate with speech-in-noise perception ability (at least at the sentence level) in monolingual and bilingual young adults.

7.
Clin Linguist Phon ; : 1-17, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38560916

ABSTRACT

The literature reports contradictory results regarding the influence of visual cues on speech perception tasks in children with phonological disorder (PD). This study compared the performance of children with (n = 15) and without PD (n = 15) in an audiovisual perception task involving voiceless fricatives. Assuming that PD could be associated with an inability to integrate phonological information from two sensory sources, we presumed that children with PD would have more difficulty integrating auditory and visual cues than typical children. A syllable identification task was conducted, with stimuli presented in four conditions: auditory-only (AO), visual-only (VO), audiovisual congruent (AV+), and audiovisual incongruent (AV-). The percentages of correct answers and the respective reaction times in the AO, VO, and AV+ conditions were considered for the analysis. For the AV- condition, the percentage of correct responses to the auditory stimuli was considered, as well as the percentage of perceptual preference (auditory, visual, and/or illusory, i.e., the McGurk effect), with the respective reaction times. Across the four conditions, children with PD gave fewer correct answers and showed longer reaction times than children with typical development, mainly in the VO condition. Both groups showed a preference for the auditory stimuli in the AV- condition. However, children with PD showed higher percentages of visual perceptual preference and of the McGurk effect than typical children. The superiority of typical children over children with PD in auditory-visual speech perception thus depends on the type of stimulus and the condition of presentation.

8.
Braz J Otorhinolaryngol ; 90(4): 101427, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38608635

ABSTRACT

OBJECTIVES: This study aimed to investigate the effects of an adhesive bone conduction device (aBCD) in children with congenital single-sided deafness (SSD). Specifically, we examined whether the aBCD improves the speech perception ability of children with congenital SSD and whether using this device adversely affects their horizontal localisation abilities. METHODS: Thirteen school-aged children with SSD and seven children with normal hearing (NH) were included in this study. Speech perception in noise was measured using the Mandarin Speech Test Materials, and sound localisation performance was evaluated using broadband noise stimuli (0.5-20 kHz) played randomly from seven loudspeakers at different stimulus levels (65-, 70-, and 75-dB SPL). RESULTS: All children with SSD showed inferior speech perception and sound localisation performance compared with children with NH. Use of the aBCD markedly improved the children's speech perception abilities in quiet and in noise; their sound localisation abilities neither improved nor deteriorated. CONCLUSION: This study demonstrates the effectiveness and safety of a non-surgical aBCD in paediatric patients with SSD. Our results provide a theoretical basis for early hearing intervention with an aBCD in children with congenital SSD who are temporarily unable to undergo ear surgery. LEVEL OF EVIDENCE: Level 3.

9.
Anim Cogn ; 27(1): 34, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625429

ABSTRACT

Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception vs. more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities. One often-studied type of degraded speech is noise-vocoded speech (sometimes thought of as cochlear-implant-simulation speech). Noise-vocoded speech is made by dividing the speech signal into frequency bands (channels), identifying the amplitude envelope of each individual band, and then using these envelopes to modulate bands of noise centered over the same frequency regions; the result is a signal with preserved temporal cues but vastly reduced frequency information. Here, we tested dogs' recognition of familiar words produced in 16-channel vocoded speech. In the first study, dogs heard their names and unfamiliar dogs' names (foils) in vocoded speech as well as natural speech. In the second study, dogs heard 16-channel vocoded speech only. Dogs listened longer to their vocoded name than to vocoded foils in both experiments, showing that they can comprehend a 16-channel vocoded version of their name without prior exposure to vocoded speech, and without immediate exposure to the natural-speech version of their name. Dogs' name recognition in the second study was mediated by the number of phonemes in the dogs' names, suggesting that phonological context plays a role in degraded speech comprehension.
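The vocoding recipe in this abstract (band-split, envelope extraction, noise modulation) can be sketched in a few lines of Python. The following is a minimal illustration assuming SciPy; the band edges, filter order, and Hilbert-envelope extraction are common choices, not necessarily the authors' exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=16, f_lo=80.0, f_hi=8000.0):
    """Replace each band's fine structure with envelope-modulated noise,
    preserving temporal cues while discarding spectral detail.
    Assumes fs is comfortably above 2 * f_hi."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.randn(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)      # speech restricted to this band
        envelope = np.abs(hilbert(band))     # its amplitude envelope
        carrier = sosfiltfilt(sos, noise)    # noise limited to the same band
        out += envelope * carrier            # modulate and sum across bands
    return out / np.max(np.abs(out))         # normalise to avoid clipping
```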


Subject(s)
Speech Perception; Speech; Humans; Animals; Dogs; Cues; Hearing; Linguistics
10.
J Clin Med ; 13(7), 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38610891

ABSTRACT

BACKGROUND: Auditory neuropathy (AN) is a hearing disorder that affects neural activity in the VIIIth cranial nerve and central auditory pathways. Progressive forms have been reported in a number of neurodegenerative diseases and may occur as a result of both the deafferentation and desynchronisation of neuronal processes. The purpose of this study was to describe changes in auditory function over time in a patient with axonal neuropathy and to explore the effect of auditory intervention. METHODS: We tracked auditory function in a child with progressive AN associated with Charcot-Marie-Tooth (Type 2C) disease, evaluating hearing levels, auditory-evoked potentials, and perceptual abilities over a 3-year period. Furthermore, we explored the effect of auditory intervention on everyday listening and neuroplastic development. RESULTS: While sound detection thresholds remained constant throughout, both electrophysiologic and behavioural evidence suggested auditory neural degeneration over the course of the study. Auditory brainstem response amplitudes were reduced, and perception of auditory timing cues worsened over time. Functional hearing ability (speech perception in noise) also deteriorated through the first 1.5 years of the study, until the child was fitted with a "remote-microphone" listening device, which subsequently improved binaural processing and restored speech perception ability to normal levels. CONCLUSIONS: Despite the deterioration of auditory neural function consistent with peripheral axonopathy, sustained experience with the remote-microphone listening system appeared to produce neuroplastic changes that improved the patient's everyday listening ability, even when the device was not being worn.

11.
Int J Audiol ; : 1-9, 2024 Mar 03.
Article in English | MEDLINE | ID: mdl-38432678

ABSTRACT

OBJECTIVE: By modelling head-shadow compensation and speech recognition outcomes, we aimed to study the benefits of a bone conduction device (BCD) during the headband trial for single-sided deafened (SSD) subjects. DESIGN: This study is based on a database of individual patient measurements, fitting parameters, and acoustic BCD properties retrospectively measured on a skull simulator or taken from the existing literature. The sensation levels of the bone-conduction and air-conduction sound paths were compared, modelling three spatial conditions with speech in quiet. We calculated the phoneme score using the Speech Intelligibility Index for three conditions in quiet and seven in noise. STUDY SAMPLE: Eighty-five SSD adults fitted with a BCD during a headband trial. RESULTS: According to our model, most subjects did not achieve full compensation of the head-shadow effect with the signal at the BCD side or in front. The modelled speech recognition in the quiet conditions did not improve with the BCD on the headband. In noise, we found a slight improvement in some specific conditions and minimal worsening in others. CONCLUSIONS: Based on an audibility model, this study challenges the fundamentals of a BCD headband trial in SSD subjects. Patients should be counselled regarding the potential outcome and alternative approaches.
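For orientation, the sketch below shows the general shape of an audibility-index calculation of the kind underlying the Speech Intelligibility Index: per-band SNRs are mapped to audibility and weighted by band importance. This is a simplified, articulation-index-style stand-in; the full ANSI S3.5 SII procedure presumably used in the study is considerably more detailed.

```python
import numpy as np

def audibility_index(snr_db, band_importance):
    """Articulation-index-style audibility: map each band's SNR onto
    [0, 1] over a 30 dB range (floor at -15 dB), weight by importance."""
    audibility = np.clip((np.asarray(snr_db, dtype=float) + 15.0) / 30.0, 0.0, 1.0)
    w = np.asarray(band_importance, dtype=float)
    return float(np.sum(w / w.sum() * audibility))

# Illustrative octave-band SNRs (dB) with flat importance weights
print(audibility_index([20, 10, 0, -5, -20], [1, 1, 1, 1, 1]))  # ≈ 0.53
```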

12.
Elife ; 12, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38470243

ABSTRACT

Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? Modelling electroencephalographic and behavioural data from N = 105 ageing individuals (39-82 y), we here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results caution against using attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: under a combination of modelling strategies, neither neural filtering at T1 nor its 2-year change predicted individual 2-year behavioural change.


Humans are social animals. Communicating with other humans is vital for our social wellbeing, and having strong connections with others has been associated with healthier aging. For most humans, speech is an integral part of communication, but speech comprehension can be challenging in everyday social settings: imagine trying to follow a conversation in a crowded restaurant or decipher an announcement in a busy train station. Noisy environments are particularly difficult to navigate for older individuals, since age-related hearing loss can impact the ability to detect and distinguish speech sounds. Some aging individuals cope better than others with this problem, but the reason why, and how listening success can change over a lifetime, is poorly understood.

One of the mechanisms involved in the segregation of speech from other sounds depends on the brain applying a 'neural filter' to auditory signals. The brain does this by aligning the activity of neurons in a part of the brain that deals with sounds, the auditory cortex, with fluctuations in the speech signal of interest. This neural 'speech tracking' can help the brain better encode the speech signals that a person is listening to.

Tune and Obleser wanted to know whether the accuracy with which individuals can implement this filtering strategy represents a marker of listening success. Further, the researchers wanted to answer whether differences in the strength of the neural filtering observed between aging listeners could predict how their listening ability would develop, and determine whether these neural changes were connected with changes in people's behaviours. To answer these questions, Tune and Obleser used data collected from a group of healthy middle-aged and older listeners twice, two years apart. They then built mathematical models using these data to investigate how differences between individuals in the brain and in behaviours relate to each other.

The researchers found that, across both timepoints, individuals with stronger neural filtering were better at distinguishing speech and listening. However, neural filtering strength measured at the first timepoint was not a good predictor of how well individuals would be able to listen two years later. Indeed, changes at the brain and the behavioural level occurred independently of each other.

Tune and Obleser's findings will be relevant to neuroscientists, as well as to psychologists and audiologists whose goal is to understand differences between individuals in terms of listening success. The results suggest that neural filtering guided by attention to speech is an important readout of an individual's attention state. However, the results also caution against explaining listening performance based solely on neural factors, given that listening behaviours and neural filtering follow independent trajectories.


Subject(s)
Aging; Longevity; Adult; Humans; Brain; Auditory Perception; Benchmarking
13.
Front Psychol ; 15: 1373191, 2024.
Article in English | MEDLINE | ID: mdl-38550642

ABSTRACT

Introduction: A substantial amount of research from the last two decades suggests that infants' attention to the eyes and mouth regions of talking faces could be a supporting mechanism by which they acquire their native language(s). Importantly, attentional strategies seem to be sensitive to three types of constraints: the properties of the stimulus, the infants' attentional control skills (which improve with age and brain maturation), and their previous linguistic and non-linguistic knowledge. The goal of the present paper is to present a probabilistic model that simulates infants' visual attention control to talking faces as a function of their language learning environment (monolingual vs. bilingual), attention maturation (i.e., age), and their increasing knowledge concerning the task at hand (detecting and learning to anticipate information displayed in the eyes or the mouth region of the speaker). Methods: To test the model, we first considered experimental eye-tracking data from monolingual and bilingual infants (aged between 12 and 18 months; in part already published) exploring a face speaking in their native language. In each of these conditions, we compared the proportion of total looking time on each of the two areas of interest (eyes vs. mouth of the speaker). Results: In line with previous studies, our experimental results show a strong bias for the mouth (over the eyes) region of the speaker, regardless of age. Furthermore, monolingual and bilingual infants appear to have different developmental trajectories, which is consistent with and extends previous results observed in the first year. Comparison of model simulations with experimental data shows that the model successfully captures patterns of visuo-attentional orientation through the three parameters that effectively modulate the simulated visuo-attentional behavior. Discussion: We interpret the parameter values and find that they adequately reflect the evolution of the strength and speed of anticipatory learning; we further discuss their descriptive and explanatory power.

14.
Q J Exp Psychol (Hove) ; : 17470218241242260, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38485525

ABSTRACT

Knowledge of the underlying mechanisms of effortful listening could help to reduce cases of social withdrawal and mitigate fatigue, especially in older adults. However, the relationship between transient effort and longer term fatigue is likely to be more complex than originally thought. Here, we manipulated the presence/absence of monetary reward to examine the role of motivation and mood state in governing changes in perceived effort and fatigue from listening. In an online study, 185 participants were randomly assigned to either a "reward" (n = 91) or "no-reward" (n = 94) group and completed a dichotic listening task along with a series of questionnaires assessing changes over time in perceived effort, mood, and fatigue. Effort ratings were higher overall in the reward group, yet fatigue ratings in that group showed a shallower linear increase over time. Mediation analysis revealed an indirect effect of reward on fatigue ratings via perceived mood state; reward induced a more positive mood state which was associated with reduced fatigue. These results suggest that: (1) listening conditions rated as more "effortful" may be less fatiguing if the effort is deemed worthwhile, and (2) alterations to one's mood state represent a potential mechanism by which fatigue may be elicited during unrewarding listening situations.
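The indirect effect reported here (reward acting on fatigue via mood) follows the standard product-of-coefficients logic of mediation analysis. Below is a minimal sketch using ordinary least squares and a percentile bootstrap; the variable names are generic, and this is the textbook approach rather than the authors' exact model.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a (X -> M) times b (M -> Y | X)."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of Y on M, given X
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))  # resample cases with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(estimates, [2.5, 97.5])
```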

15.
Neuropsychologia ; 198: 108866, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38518889

ABSTRACT

Previous psychophysical and neurophysiological studies in young healthy adults have provided evidence that audiovisual speech integration occurs with a large degree of temporal tolerance around true simultaneity. To further determine whether audiovisual speech asynchrony modulates auditory cortical processing and neural binding in young healthy adults, N1/P2 auditory evoked responses were compared using an additive model during a syllable categorization task, with or without an audiovisual asynchrony ranging from 240 ms visual lead to 240 ms auditory lead. Consistent with previous psychophysical findings, the observed results converge in favor of an asymmetric temporal integration window. Three main findings were observed: (1) predictive temporal and phonetic cues from pre-phonatory visual movements before the acoustic onset appeared essential for neural binding to occur; (2) audiovisual synchrony, with visual pre-phonatory movements predictive of the onset of the acoustic signal, was a prerequisite for N1 latency facilitation; and (3) P2 amplitude suppression and latency facilitation occurred even when visual pre-phonatory movements were predictive not of the acoustic onset but of the syllable to come. Taken together, these findings help further clarify how audiovisual speech integration partly operates through two stages of visually based temporal and phonetic predictions.

16.
Cereb Cortex ; 34(3), 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38494418

ABSTRACT

Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. "The little girl was excited to lose her first tooth" → "Tha fittle girmn wam expited du roos har derst cooth"). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.


Subject(s)
Language; Speech; Humans; Female; Speech/physiology; Temporal Lobe; Transcranial Magnetic Stimulation; Noise
17.
Trends Hear ; 28: 23312165241229057, 2024.
Article in English | MEDLINE | ID: mdl-38483979

ABSTRACT

A practical speech audiometry tool is the digits-in-noise (DIN) test, used for hearing screening in populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., a clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test presents 24 digit triplets in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses without a human supervisor, using the open-source automatic speech recognition toolkit Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) each completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used to evaluate the transcript of responses decoded by Kaldi-NL. Study 1 evaluated Kaldi-NL performance through its word error rate (WER): the percentage of digit decoding errors in the transcript relative to the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with decoding errors in an average of three triplets per participant. Study 2 analyzed, using bootstrapping simulations, the effect that triplets with Kaldi-NL decoding errors had on the DIN test output (SRT). Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.
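Word error rate, the metric of Study 1, is conventionally computed as an edit distance over word tokens. The sketch below implements that standard definition in plain Python; note that the study counted only digit errors, a restriction this generic version does not reproduce.

```python
def word_error_rate(reference, hypothesis):
    """Standard WER: (substitutions + deletions + insertions) / len(reference),
    via dynamic-programming edit distance over whitespace-separated tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                     # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                     # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("4 1 9", "4 7 9"))  # one substitution in a triplet ≈ 0.33
```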


Subject(s)
Speech Perception; Adult; Humans; Speech Reception Threshold Test; Audiometry, Speech; Noise; Hearing Tests
18.
Cochlear Implants Int ; : 1-9, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512716

ABSTRACT

OBJECTIVES: Cochlear implantation is the most effective treatment for patients with severe-to-profound sensorineural hearing loss, and much scientific work has been published since its inception. There is a need for critical reflection on how and what we publish on cochlear implantation. METHODS: All Science Citation Index Expanded articles published between 1980 and 2022 containing the term 'cochlear implants' or 'cochlear implantation' were collected from the Web of Science database. Characteristics such as publication dates, journals, numbers of citations, countries of origin, authors, institutions, and co-occurring keywords were assessed. RESULTS: 13,934 articles were included in the data analysis. Otology and Neurotology, Ear and Hearing, and Pediatric Otorhinolaryngology were the three journals with the most publications. Hannover Medical School, the University of Melbourne, and the University of Northern Iowa were the three institutions with the most publications. DISCUSSION: The number of scientific publications on cochlear implant technology has increased over the last 40 years. Beyond the focus on speech perception, the research landscape on cochlear implantation is broad and diverse; however, the number of countries and institutions contributing to these publications is limited. CONCLUSION: This bibliometric analysis serves as a quantitative overview of the research landscape on cochlear implantation.
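The descriptive statistics in such a bibliometric analysis reduce to group-and-count operations over the exported records. A minimal sketch follows, assuming a hypothetical Web of Science CSV export; the file name and column names are invented for illustration and will differ from the real export schema.

```python
import pandas as pd

# Hypothetical export: one row per article. Column names are assumptions.
df = pd.read_csv("wos_cochlear_implants.csv")

pubs_per_year = df["publication_year"].value_counts().sort_index()  # growth over time
top_journals = df["journal"].value_counts().head(3)                 # most prolific journals
top_institutions = df["institution"].value_counts().head(3)
most_cited = df.nlargest(10, "times_cited")[["title", "times_cited"]]
print(top_journals)
```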

19.
Front Pediatr ; 12: 1282952, 2024.
Article in English | MEDLINE | ID: mdl-38510079

ABSTRACT

Introduction: Children with early-identified unilateral hearing loss (UHL) might be at risk for delays in early speech and language, functional communication, psychosocial skills, and quality of life (QOL). However, a paucity of relevant research prohibits strong conclusions. This study aimed to provide new evidence relevant to this issue. Methods: Participants were 34 children, ages 9;0 to 12;7 (years;months), who were identified with UHL via newborn hearing screening. Nineteen children had been fitted with hearing devices, whereas 15 had not. Assessments included measures of speech perception and intelligibility; language and cognition; functional communication; psychosocial abilities; and QOL. Results and discussion: As a group, the children scored significantly below the normative mean, and more than one standard deviation below the typical range, on speech perception in spatially separated noise, and significantly below the normative mean on written passage comprehension. Outcomes in other domains appear typical. There was, however, considerable within-participant variation in the children's degree of hearing loss over time, raising the possibility that this pattern of results might change as the children get older. The current study also revealed that participants with higher levels of nonverbal ability demonstrated better general language skills and better comprehension of written passages. By contrast, neither perception of speech in collocated noise nor fitting with a hearing device accounted for unique variance in the outcome measures. Future research should, however, evaluate the fitting of hearing devices using random assignment of participants to groups, in order to avoid any confounding influence of degree of hearing loss or children's past/current level of progress.
